# Build frontend dist in Docker image for production deployment #13
## Reviewer's Guide

Multi-stage Dockerfile introduced to build the frontend and include its `dist/` assets in the production image, plus README documentation for Docker-based production deployment and required env vars.

### Flow diagram for Docker build and run process with environment and volume

```mermaid
flowchart TD
    start["Start"] --> build_cmd["Run docker build -t spectracleanseai:latest ."]
    build_cmd --> builder_stage["Builder stage<br/>base image node:18-bookworm-slim<br/>WORKDIR /app<br/>npm ci<br/>npm run build"]
    builder_stage --> runtime_stage["Runtime stage<br/>base image node:18-bookworm-slim<br/>ENV NODE_ENV=production<br/>apt-get install perl<br/>npm ci --omit=dev<br/>COPY server.js<br/>COPY dist from builder<br/>mkdir /app/uploads /data<br/>chown appuser:appgroup<br/>USER appuser<br/>EXPOSE 3001<br/>CMD npm start"]
    runtime_stage --> image_built["Image spectracleanseai:latest built"]
    image_built --> run_cmd["Run docker run --rm -p 3001:3001<br/>-e NODE_ENV=production<br/>-e JWT_SECRET<br/>-e STRIPE_SECRET_KEY<br/>-e STRIPE_WEBHOOK_SECRET<br/>-e STRIPE_CREATOR_PRICE_ID<br/>-e STRIPE_STUDIO_PRICE_ID<br/>-e GEMINI_API_KEY<br/>-e FRONTEND_URL<br/>-e DB_PATH=/data/spectra.db<br/>-v spectracleanse_data:/data<br/>spectracleanseai:latest"]
    run_cmd --> container_running["Container running<br/>server.js serves API and dist SPA on port 3001<br/>SQLite uses /data via spectracleanse_data volume"]
    container_running --> end_node["Production app serving requests"]
```
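Putting the diagram together with the diff fragments quoted in the review below, the Dockerfile is roughly the following sketch; the exact `COPY` paths, the uploads/data directory handling, and the final `CMD` form are partly assumptions, so treat this as an illustration rather than the PR's literal file:

```dockerfile
# Builder stage: full dependency install (including devDependencies) + frontend build
FROM node:18-bookworm-slim AS builder
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
RUN npm run build

# Runtime stage: production deps only, plus Perl for exiftool-vendored
FROM node:18-bookworm-slim AS runtime
ENV NODE_ENV=production
WORKDIR /app
RUN apt-get update \
    && apt-get install -y --no-install-recommends perl \
    && rm -rf /var/lib/apt/lists/*
RUN groupadd --system appgroup && useradd --system --gid appgroup appuser
COPY package.json package-lock.json ./
RUN npm ci --omit=dev \
    && npm cache clean --force
COPY server.js ./
COPY --from=builder /app/dist ./dist
RUN mkdir -p /app/uploads /data && chown -R appuser:appgroup /app /data
USER appuser
EXPOSE 3001
CMD ["npm", "start"]
```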
Hey - I've found 2 issues
Prompt for AI Agents
Please address the comments from this code review:
## Individual Comments
### Comment 1
<location path="Dockerfile" line_range="10-19" />
<code_context>
-# ExifTool requires Perl, which is not included in Alpine by default
-RUN apk add --no-cache perl
+COPY package.json package-lock.json ./
+RUN npm ci
-# Create a non-root user to run the process
-RUN addgroup -S appgroup && adduser -S appuser -G appgroup
+COPY . .
+RUN npm run build
+FROM node:18-bookworm-slim AS runtime
+ENV NODE_ENV=production
WORKDIR /app
-# Install dependencies first so Docker can cache this layer
-COPY package*.json ./
-RUN npm ci --omit=dev
+# exiftool-vendored requires Perl at runtime
+RUN apt-get update \
+ && apt-get install -y --no-install-recommends perl \
+ && rm -rf /var/lib/apt/lists/*
+
+# Create non-root user
+RUN groupadd --system appgroup && useradd --system --gid appgroup appuser
+
+COPY package.json package-lock.json ./
+RUN npm ci --omit=dev \
+ && npm cache clean --force
</code_context>
<issue_to_address>
**suggestion (performance):** Consider reusing node_modules from the builder stage to avoid a second full npm install
Dependencies are currently installed twice (`npm ci` in the builder and `npm ci --omit=dev` in the runtime). To reduce build time and network usage, you could copy `node_modules` from the builder into the runtime image and then run `npm prune --production` (or equivalent) there, preserving the multi-stage setup without a second full install.
Suggested implementation:
```
# SpectraCleanse AI – Production Dockerfile (builds frontend + backend runtime)
# ─────────────────────────────────────────────────────────────────────────────
FROM node:18-bookworm-slim AS builder
WORKDIR /app
# Install full dependency tree (including devDependencies) for build
COPY package.json package-lock.json ./
RUN npm ci
# Copy source and build
COPY . .
RUN npm run build
```
To fully implement reusing `node_modules` from the builder in the runtime image, you should also:
1. In the `runtime` stage (the `FROM node:18-bookworm-slim AS runtime` section), replace the second `npm ci --omit=dev` with copying `node_modules` from the builder and pruning dev dependencies, e.g.:
```dockerfile
# Instead of:
COPY package.json package-lock.json ./
RUN npm ci --omit=dev \
&& npm cache clean --force
# Use:
COPY package.json package-lock.json ./
COPY --from=builder /app/node_modules ./node_modules
RUN npm prune --production \
&& npm cache clean --force
```
2. Ensure you copy the built artifacts from the builder into the runtime image (if not already present), for example:
```dockerfile
COPY --from=builder /app/dist ./dist
```
3. Keep the existing Perl installation and non-root user setup in the runtime stage as-is, so that `exiftool-vendored` continues to work and the container still runs as `appuser`.
Adjust paths like `/app/dist` or the final `CMD` to match your existing build output and startup command.
</issue_to_address>
### Comment 2
<location path="README.md" line_range="81" />
<code_context>
+### Required production environment variables
+
+- `NODE_ENV=production`
+- `JWT_SECRET`
+- `STRIPE_SECRET_KEY`
+- `STRIPE_WEBHOOK_SECRET`
</code_context>
<issue_to_address>
**🚨 suggestion (security):** Clarify that `JWT_SECRET` should be a strong, unique secret in production.
Add a brief note to this bullet that `JWT_SECRET` must be a strong, unique value and not reused across environments to avoid weak production configurations.
```suggestion
- `JWT_SECRET` – a strong, unique secret value; **do not** reuse across environments (e.g., dev/staging/production)
```
</issue_to_address>
## Motivation

- The previous image packaged only `server.js` and production deps and therefore did not include the frontend `dist/`, causing the SPA UI to be absent in containers.
- `server.js` serves `dist/` with an SPA fallback for non-`/api` routes, so the image must contain `dist/index.html` and assets for correct production behavior.
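The SPA-fallback rule that `server.js` relies on can be illustrated with a minimal stdlib-only sketch; the actual implementation is not shown in this PR page, so the `resolveSpaPath` helper and its exact asset heuristic are hypothetical:

```javascript
// Hypothetical routing rule: /api/* goes to the API layer, asset-like paths
// are served as-is from dist/, and every other path falls back to index.html.
function resolveSpaPath(urlPath) {
  if (urlPath.startsWith("/api")) return null;   // handled by the API routes
  if (urlPath.includes(".")) return urlPath;     // looks like a static asset
  return "/index.html";                          // SPA fallback
}

console.log(resolveSpaPath("/api/users"));     // null
console.log(resolveSpaPath("/assets/app.js")); // "/assets/app.js"
console.log(resolveSpaPath("/pricing"));       // "/index.html"
```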
## Description

- Converted the `Dockerfile` into a multi-stage build with a `builder` stage that runs `npm ci` and `npm run build`, and a `runtime` stage that installs runtime deps, installs `perl` for `exiftool-vendored`, creates a non-root user, copies `server.js`, and copies `dist/` from the builder via `COPY --from=builder /app/dist ./dist` so the final image contains the built frontend.
- The runtime stage sets `NODE_ENV=production`, keeps a non-root `appuser`, creates runtime directories `/app/uploads` and `/data`, exposes port `3001`, and starts the app with `npm start`.
- Updated `README.md` with `docker build` and `docker run` examples and a list of required production environment variables including `JWT_SECRET`, Stripe vars, `GEMINI_API_KEY`, `FRONTEND_URL`, `DB_PATH`, `NODE_ENV`, and optional `REDIS_URL`.

## Testing

- Ran `npm install` in the repo and it completed successfully.
- Ran `npm run build` and the frontend produced `dist/` (build completed and reported `dist/index.html` and asset files).
- Attempted `docker build -t spectracleanseai:test .`, but Docker is unavailable in this environment (`docker: command not found`), so the image build could not be executed here; the `Dockerfile` was inspected to verify `dist/` is copied into the final stage.
- Files changed: `Dockerfile` and `README.md`.

Codex Task
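Since the container only fails visibly at request time when one of the required env vars above is absent, a fail-fast startup check along these lines could complement the README list; the `missingEnv` helper is a sketch and not part of this PR, though the variable names are taken from its README additions:

```javascript
// Sketch: detect missing required production env vars at startup.
const REQUIRED_ENV = [
  "JWT_SECRET",
  "STRIPE_SECRET_KEY",
  "STRIPE_WEBHOOK_SECRET",
  "STRIPE_CREATOR_PRICE_ID",
  "STRIPE_STUDIO_PRICE_ID",
  "GEMINI_API_KEY",
  "FRONTEND_URL",
];

function missingEnv(env) {
  return REQUIRED_ENV.filter((name) => !env[name]);
}

const missing = missingEnv(process.env);
if (missing.length > 0) {
  console.error(`Missing required env vars: ${missing.join(", ")}`);
  // In server.js this would typically be followed by: process.exit(1);
}
```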
## Summary by Sourcery
Introduce a multi-stage Docker-based production build that bundles the built frontend SPA into the runtime image and documents how to run it with required environment configuration.
Build:

- Convert the Dockerfile to a multi-stage build that compiles the frontend and bundles its `dist/` output into the production runtime image.

Documentation:

- Document Docker-based production deployment in the README, including `docker build`/`docker run` examples and the required environment variables.